20 research outputs found

    Two-Handed Gesture Recognition

    Nowadays, computer interaction is mostly done using dedicated devices, but gestures are an easy means of expression between humans that could be used to communicate with computers in a more natural manner. Most current research on hand gesture recognition for Human-Computer Interaction deals with one-handed gestures, yet two-handed gestures can provide more efficient and easier-to-use interfaces. This is particularly the case for two-handed gestures we perform in the physical world, such as gestures to manipulate objects. It would be very valuable to allow the user to interact with virtual objects in the same way that he/she interacts with physical ones. This paper presents a database of two-handed gestures for manipulating virtual objects on the screen (mostly rotations), along with recognition experiments using Hidden Markov Models (HMMs). The results obtained with this state-of-the-art algorithm are very encouraging. These gestures could improve the interaction performance between the user and virtual reality applications.
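
As a rough illustration of how HMM-based gesture recognition works, the sketch below scores a quantized motion sequence under one small discrete HMM per gesture class (via the scaled forward algorithm) and picks the most likely class. All model names and parameters here are invented for illustration; they are not taken from the paper, which trains its HMMs on real two-handed trajectory data.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete HMM."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for o in obs[1:]:
        scale = sum(alpha)              # rescale to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return loglik + math.log(sum(alpha))

# Two toy left-to-right HMMs over quantized hand-motion directions
# (0 = leftward step, 1 = rightward step); all parameters are made up
# for this sketch, not taken from the paper.
MODELS = {
    "rotate_right": ([1.0, 0.0],                 # initial state probs
                     [[0.7, 0.3], [0.0, 1.0]],   # transitions
                     [[0.1, 0.9], [0.2, 0.8]]),  # emissions favour 1
    "rotate_left":  ([1.0, 0.0],
                     [[0.7, 0.3], [0.0, 1.0]],
                     [[0.9, 0.1], [0.8, 0.2]]),  # emissions favour 0
}

def classify(obs):
    """Pick the gesture model under which the sequence is most likely."""
    return max(MODELS, key=lambda g: forward_loglik(obs, *MODELS[g]))
```

A real system would instead emit continuous 2D/3D hand positions and learn the parameters with Baum-Welch, but the likelihood-and-argmax decision rule is the same.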

    Recognition of Isolated Complex Mono- and Bi-Manual 3D Hand Gestures

    In this paper, we address the problem of recognizing isolated complex mono- and bi-manual hand gestures. In the proposed system, hand gestures are represented by the 3D trajectories of blobs, which are obtained by tracking colored body parts in real time using the EM algorithm. Most studies on hand gestures have used only small vocabularies; here, we report results on a more complex database of mono- and bi-manual gestures. These results are obtained with a state-of-the-art sequence processing algorithm, namely Hidden Markov Models (HMMs), implemented within the framework of an open-source machine learning library.
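
For a flavour of the EM-based blob fitting, the toy sketch below fits two isotropic 2D Gaussian "blobs" to a set of pixel coordinates by alternating expectation and maximization steps. This is a minimal stand-in for the tracker described in the abstract: the real system also exploits colour likelihoods and temporal continuity, and the crude initialization used here (first and last point) is a simplification.

```python
import math

def em_blobs(points, iters=20):
    """Fit two isotropic 2-D Gaussian blobs to (x, y) points with EM.
    Toy sketch only: a real-time colour-blob tracker would add colour
    likelihoods, temporal priors, and a better initialisation."""
    k = 2
    mus = [points[0], points[-1]]      # crude init on two data points
    sig2 = [1.0] * k                   # per-blob isotropic variance
    w = [1.0 / k] * k                  # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each blob for each point
        resp = []
        for (x, y) in points:
            dens = [w[j] / (2 * math.pi * sig2[j]) *
                    math.exp(-((x - mus[j][0]) ** 2 + (y - mus[j][1]) ** 2)
                             / (2 * sig2[j]))
                    for j in range(k)]
            z = sum(dens) or 1e-300    # guard against total underflow
            resp.append([d / z for d in dens])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(points)
            mus[j] = (sum(r[j] * p[0] for r, p in zip(resp, points)) / nj,
                      sum(r[j] * p[1] for r, p in zip(resp, points)) / nj)
            sig2[j] = max(1e-6, sum(r[j] * ((p[0] - mus[j][0]) ** 2 +
                                            (p[1] - mus[j][1]) ** 2)
                                    for r, p in zip(resp, points)) / (2 * nj))
    return mus
```

Run per frame on skin- or glove-coloured pixels, the fitted means give the blob centres whose 3D trajectories (after stereo triangulation) feed the gesture recognizer.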

    Hand Posture Classification and Recognition using the Modified Census Transform

    Developing new techniques for human-computer interaction is very challenging. Vision-based techniques have the advantage of being unobtrusive, and the hands are a natural device that can be used for more intuitive interfaces. But in order to use the hands for interaction, it is necessary to be able to recognize them in images. In this paper, we propose to apply to the hand posture classification and recognition tasks an approach that has been successfully used for face detection (Froba04). The features are based on the Modified Census Transform and are illumination invariant. For the classification and recognition processes, a simple linear classifier is trained using a set of feature lookup tables. The database used for the experiments is a benchmark database in the field of posture recognition. Two protocols have been defined, and we provide results following both protocols for the classification and recognition tasks. The results are very encouraging.
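
To make the feature concrete: the Modified Census Transform assigns each pixel a 9-bit code by comparing every pixel of its 3×3 neighbourhood (centre included) against the neighbourhood mean. Because the comparisons survive any additive offset or positive scaling of the intensities, the codes are illumination invariant. The sketch below is a minimal pure-Python version; the actual system computes these codes densely over the image and feeds them to a lookup-table-based linear classifier.

```python
def mct(img, y, x):
    """9-bit Modified Census Transform code at pixel (y, x).
    Unlike the plain census transform, every pixel of the 3x3
    neighbourhood (centre included) is compared to the neighbourhood
    mean, giving at most 511 distinct codes (the all-ones pattern
    cannot occur, since not every pixel can exceed the mean)."""
    patch = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    mean = sum(patch) / 9.0
    bits = 0
    for p in patch:                    # row-major order, MSB = top-left
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits
```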

    Reconnaissance de gestes 3D bi-manuels

    This article presents a database of sixteen dynamic gestures obtained by tracking differently colored body parts, with the tracking performed in real time by the EM algorithm. These gestures, performed with one or two hands, are learned and recognized with HMMs. The preprocessing applied to this gesture database, as well as the classification results, are presented.

    Two-handed gestures for human-computer interaction

    The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques and thus only require cameras, without any other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects: the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand or by a series of hand postures in a sequence of images; this second aspect is related to the Hand Gesture Recognition (HGR) task.

    Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both one- and two-handed gestures, on which part of the results reported here are based. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose an HPR technique based on a new feature-extraction method. This method has the advantage of being faster than conventional methods while yielding good performance, and we provide comparison results between this method and other state-of-the-art techniques. Finally, the most important contribution of this thesis lies in a thorough study of the state of the art, not only in HGR and HPR but also more generally in the field of HCI.

    The first chapter of the thesis provides an extended study of the state of the art. The second chapter contributes to HPR: we propose to apply to HPR a technique employed with success for face detection, which uses the Modified Census Transform (MCT) to extract relevant features from images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR: we describe the first recorded database, containing both one- and two-handed gestures in 3D space, and compare two models used with success in HGR, namely Hidden Markov Models (HMMs) and Input-Output Hidden Markov Models (IOHMMs). The fourth chapter also focuses on HGR, more precisely on two-handed gesture recognition. For that purpose, a second database has been recorded using two cameras; the goal of these gestures is to manipulate virtual objects on a screen. We investigate on this second database the state-of-the-art sequence processing techniques used in the previous chapter, and discuss the results obtained using different features and using images from one or two cameras.

    In conclusion, we propose a method for HPR based on a new feature extraction. For HGR, we provide two databases and comparison results for two major sequence processing techniques. Finally, we present a complete survey of recent state-of-the-art techniques for both HPR and HGR, together with some possible applications of these techniques to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.
